Effects of Reduced Precision on Floating-Point SVM Classification Accuracy
Authors
Abstract
There is growing interest in performing ever more complex classification tasks on mobile and embedded devices in real time, which creates a need for efficient implementations of the respective algorithms. Support vector machines (SVMs) represent a powerful class of nonlinear classifiers, and reducing the working precision is a promising approach to achieving efficient implementations of the SVM classification phase. However, the relationship between SVM classification accuracy and the arithmetic precision used is not yet sufficiently understood. We investigate this relationship in floating-point arithmetic and show that a large reduction in the working precision of the classification process is often possible without loss of classification accuracy. Moreover, we adapt bounds on allowable SVM parameter perturbations in order to estimate the lowest possible working precision in floating-point arithmetic. Among the three representative data sets considered in this paper, none requires a precision higher than 15 bits, a considerable reduction from the 53 bits used in double-precision floating-point arithmetic. Furthermore, we demonstrate that analytic bounds on the working precision for SVMs with a Gaussian kernel provide good predictions of possible reductions in the working precision without sacrificing classification accuracy.
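The effect the abstract describes can be illustrated with a small sketch. The function and parameter names below are assumptions for illustration, not the paper's code: it simulates a reduced working precision by rounding every intermediate value of a Gaussian-kernel SVM decision function to p significand bits, so the decision values at, say, p = 15 can be compared against full double precision.

```python
import numpy as np

def round_to_precision(x, p):
    """Round each float to p significand bits.

    Splits x into significand m in [0.5, 1) and exponent e
    (x = m * 2**e), rounds m to p bits, and recombines.
    """
    x = np.asarray(x, dtype=np.float64)
    m, e = np.frexp(x)
    m = np.round(m * 2.0 ** p) / 2.0 ** p
    return np.ldexp(m, e)

def svm_decision(X, sv, alpha_y, b, gamma, p=53):
    """Gaussian-kernel SVM decision values, with inputs, coefficients,
    squared distances, and kernel values all rounded to p bits to
    mimic a reduced-precision implementation."""
    Xq = round_to_precision(X, p)
    svq = round_to_precision(sv, p)
    # Pairwise squared distances between test points and support vectors.
    d2 = ((Xq[:, None, :] - svq[None, :, :]) ** 2).sum(-1)
    K = round_to_precision(np.exp(-gamma * round_to_precision(d2, p)), p)
    return round_to_precision(K @ round_to_precision(alpha_y, p) + b, p)
```

Comparing `svm_decision(X, sv, alpha_y, b, gamma, p=15)` against `p=53` on held-out data is one way to observe the kind of precision/accuracy trade-off the paper quantifies: the predicted labels typically agree as long as the decision values stay well clear of zero.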
Similar resources
Understanding the Energy and Precision Requirements for Online Learning
It is well-known that the precision of data, hyperparameters, and internal representations employed in learning systems directly impacts their energy, throughput, and latency. The precision requirements for the training algorithm are also important for systems that learn on-the-fly. Prior work has shown that the data and hyperparameters can be quantized heavily without incurring much penalty in...
Face Recognition using Eigenfaces, PCA and Support Vector Machines
This paper is based on a combination of principal component analysis (PCA), eigenfaces, and support vector machines. Using the N-fold method and with respect to the value of N, each person's face images are divided into two sections. As a result, vectors of training features and test features are obtained. Classification precision and accuracy were examined with three different types of kernel and...
Automated Floating-Point Precision Analysis
Title of dissertation: Automated Floating-Point Precision Analysis Michael O. Lam, Doctor of Philosophy, 2014 Dissertation directed by: Professor Jeffrey K. Hollingsworth Department of Computer Science As scientific computation continues to scale upward, correct and efficient use of floating-point arithmetic is crucially important. Users of floating-point arithmetic encounter many problems, inc...
Improving Text Classification with Vectors of Reduced Precision
This paper presents an analysis of the impact of floating-point precision reduction on the quality of text classification. Reducing the precision of the vectors representing the data (e.g. the TF–IDF representation in our case) allows a decrease in computing time and memory footprint on dedicated hardware platforms. The impact of precision reduction on classification quality was...
A hardware MP3 decoder with low precision floating point intermediate storage
This thesis investigates the effects of using limited-precision floating point for intermediate storage in an embedded MP3 decoder. The advantage of using limited precision is that the values need shorter word lengths and thus a smaller memory for storage. The official reference decoder was modified so that the effects of different word lengths and algorithms could be examined. Finally,...
Journal title:
Volume Issue
Pages -
Publication date 2011